Module Details - AI and Apps

AI-focused modules

| Module key | Chart path | Install when | If skipped |
| --- | --- | --- | --- |
| lensServiceSummary | oci://ghcr.io/gravitate-health/charts/lens-service-summary | You need generated summary lenses | Summary lens unavailable |
| lensServicePlainLanguage | oci://ghcr.io/gravitate-health/charts/lens-service-plain-language | You need plain-language adaptation | Plain-language lens unavailable |
| chatWithEpi | oci://ghcr.io/gravitate-health/charts/chat-with-epi | You need the chatbot backend | Chat backend unavailable |
| chatbotInterface | oci://ghcr.io/gravitate-health/charts/chatbot-interface | You need the chatbot web/API interface | Chat UI/API gateway unavailable |

Application/support modules

| Module key | Chart path | Install when | If skipped |
| --- | --- | --- | --- |
| supportingMaterialManager | oci://ghcr.io/gravitate-health/charts/supporting-material-manager | You need external supporting-material handling and bucket flow | Supporting-material workflow unavailable |
| provenanceEngine | oci://ghcr.io/gravitate-health/charts/provenance-engine | You need provenance/content-trust capabilities | No provenance engine APIs |
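
Modules in both tables are enabled or disabled through state values. For example, a deployment without provenance needs could skip that module; the modules.provenanceEngine key below is inferred from the modules.chatWithEpi toggle shown later on this page and should be checked against the Helmfile:

helmfile -f helmfile.yaml -e full \
  --state-values-set modules.provenanceEngine=false \
  apply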

External AI backend requirements

Several AI modules depend on external model/LLM backends that must be provisioned separately. These backends are not installed by the Helmfile and must be reachable from the cluster.

Chat modules

chatWithEpi

Requires an Ollama-compatible LLM service endpoint.

  • Config key: config.modelUrl
  • Example: https://ollama.lst.tfo.upm.es
  • Before deploying: Ensure the Ollama instance is reachable and healthy from the cluster
  • Helmfile override: pass --state-values-set modules.chatWithEpi=false to skip this module if the backend is unavailable
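
For example, to check reachability from inside the cluster before deploying, a throwaway pod can curl Ollama's standard /api/tags model-listing endpoint (the pod name and image here are illustrative choices, not platform requirements):

kubectl run ollama-check --rm -it --restart=Never \
  --image=curlimages/curl --command -- \
  curl -sf https://ollama.lst.tfo.upm.es/api/tags

A non-zero exit code means the endpoint is unreachable or unhealthy; in that case disable the module as shown above.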

Lens services

lensServiceSummary

Requires an external summarization/LLM service (typically Ollama or similar).

  • Config keys: Refer to the chart's values.yaml for endpoint/model configuration
  • Before deploying: Provision the model service and confirm network reachability
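
The chart's default values, including the endpoint/model keys, can be inspected straight from the OCI registry (Helm 3.8 or newer):

helm show values oci://ghcr.io/gravitate-health/charts/lens-service-summary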

lensServicePlainLanguage

Requires an external model service for NLP/adaptation tasks.

  • Config key: Model endpoint URL (check chart for exact key)
  • Before deploying: Provision the model service
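
The same helm show values inspection works for this chart. If no model service can be provisioned, both lens modules can be skipped; the key names below follow the modules.chatWithEpi pattern above and should be verified against the Helmfile:

helmfile -f helmfile.yaml -e full -l phase=ai \
  --state-values-set modules.lensServiceSummary=false \
  --state-values-set modules.lensServicePlainLanguage=false \
  apply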

Configuring AI backends

Once external backends are ready, override the endpoint URLs during deployment.

Via Helmfile command

helmfile -f helmfile.yaml -e full -l phase=ai \
  --state-values-set chat-with-epi.config.modelUrl=https://your-ollama.example.com \
  apply

Via values overlay

Create a file values/ai-backends.yaml.gotmpl:

chat-with-epi:
  config:
    modelUrl: {{ .Values.aiBackends.chatModelUrl | quote }}

lens-service-summary:
  config:
    modelUrl: {{ .Values.aiBackends.summaryModelUrl | quote }}

lens-service-plain-language:
  config:
    modelUrl: {{ .Values.aiBackends.nlpModelUrl | quote }}
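
Optionally, helmfile's get template function can supply a fallback so the overlay renders even when a state value is unset; the in-cluster URL below is a placeholder assumption, not a platform default:

chat-with-epi:
  config:
    modelUrl: {{ .Values | get "aiBackends.chatModelUrl" "http://ollama.ollama.svc.cluster.local:11434" | quote }}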

Then deploy:

helmfile -f helmfile.yaml -e full \
  --state-values-set aiBackends.chatModelUrl=https://ollama.example.com \
  --state-values-set aiBackends.summaryModelUrl=https://ollama.example.com \
  --state-values-set aiBackends.nlpModelUrl=https://ollama.example.com \
  apply
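
After the apply, the values each release actually received can be double-checked with helm (the release name matches the overlay keys above; the namespace is an illustrative assumption):

helm get values chat-with-epi --namespace gravitate-health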

Dependency notes

  • AI/chat modules assume the core FHIR and focusing modules are healthy.
  • chatbotInterface relies on the service-account and authentication setup provided by the platform baseline.
  • Critical: external model backends (Ollama, etc.) must be provisioned and reachable before the AI modules are installed.
  • If a backend is unavailable, disable the dependent module, or provision the backend first and then apply with the updated config values.
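
Before applying the AI phase, a dry-run diff (requires the helm-diff plugin) can confirm that only the expected AI releases will change, using the same phase=ai selector as earlier:

helmfile -f helmfile.yaml -e full -l phase=ai diff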